Open Web: Group items tagged "operating systems"

Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - The Washington Post - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material were independently provided to The Post by people concerned that such systems are being abused.)
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  •  
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel, and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.
Gary Edwards

Google Wave Operational Transformation (Google Wave Federation Protocol) - 0 views

  • Wave document operations consist of the following mutation components: skip; insert characters; insert element start; insert element end; insert anti-element start; insert anti-element end; delete characters; delete element start; delete element end; delete anti-element start; delete anti-element end; set attributes; update attributes; commence annotation; conclude annotation. The following is a more complex example document operation: skip 3; insert element start with tag "p" and no attributes; insert characters "Hi there!"; insert element end; skip 5; delete characters 4. From this, one can see how an entire XML document can be represented as a single document operation.
  • Wave Operations: Wave operations consist of document operations, for modifying XML documents, and non-document operations. Non-document operations handle tasks such as adding or removing a participant from a Wavelet. We'll focus on document operations here, as they are the most central to Wave. It's worth noting that an XML document in Wave can be regarded as a single document operation applied to the empty document. This section also covers how Wave operations remain efficient even in the face of a large number of transforms. XML Document Support: Wave uses a streaming interface for document operations, similar to an XMLStreamWriter or a SAX handler. A document operation consists of a sequence of ordered document mutations, which are applied in sequence as you traverse the document linearly. Designing document operations in this manner makes it easier to write the transformation and composition functions described later. In Wave, every 16-bit Unicode code unit (as used in JavaScript, JSON, and Java strings), start tag, or end tag in an XML document is called an item. Gaps between items are called positions; position 0 is before the first item. A document operation can contain mutations that reference positions. For example, a "skip" mutation specifies how many positions to skip ahead in the XML document before applying the next mutation. Wave document operations also support annotations. An annotation is metadata associated with an item range, i.e. a start position and an end position. This is particularly useful for describing text formatting and spelling suggestions, as it does not unnecessarily complicate the underlying XML document format. (A minimal code sketch of a document operation appears after the summary below.)
  •  
    Summary: Collaborative document editing means multiple editors being able to edit a shared document at the same time. Live and concurrent means being able to see the changes another person is making, keystroke by keystroke. Currently, there are already a number of products on the market that offer collaborative document editing. Some offer live concurrent editing, such as EtherPad and SubEthaEdit, but do not offer rich text. Others offer rich text, such as Google Docs, but do not offer a seamless live concurrent editing experience, as merge failures can occur. Wave stands as a solution that offers both live concurrent editing and rich text document support. The result is that Wave allows for a very engaging conversation where you can see what the other person is typing, character by character, much as you would converse in a cafe. This is very much like instant messaging, except you can see what the other person is typing, live. Wave also allows for a more productive collaborative document editing experience, where people don't have to worry about stepping on each other's toes and can still use common word processor functionality such as bold, italics, bullet points, and headings. Wave is more than just rich text documents. In fact, Wave's core technology allows live concurrent modification of XML documents, which can be used to represent any structured content, including system data shared between clients and backend systems. To achieve these goals, Wave uses a concurrency control system based on Operational Transformation.
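    Below is a minimal, illustrative TypeScript sketch of the document-operation model described above — not the actual Wave implementation. It treats items as single characters and covers only skip, insert characters, and delete characters, omitting element tags, anti-elements, attributes, and annotations; all names are invented for illustration. Wave proper requires an operation to span the whole document; the sketch relaxes that by copying any remaining items at the end.

      type Mutation =
        | { type: "skip"; count: number }
        | { type: "insertCharacters"; chars: string }
        | { type: "deleteCharacters"; count: number };

      // Apply an ordered list of mutations while walking the document linearly.
      function apply(doc: string, op: Mutation[]): string {
        let out = "";
        let pos = 0; // current position (a gap between items)
        for (const m of op) {
          switch (m.type) {
            case "skip": // copy the skipped items through unchanged
              out += doc.slice(pos, pos + m.count);
              pos += m.count;
              break;
            case "insertCharacters": // new items appear at the current position
              out += m.chars;
              break;
            case "deleteCharacters": // consume items without copying them
              pos += m.count;
              break;
          }
        }
        return out + doc.slice(pos); // copy whatever remains
      }

      // Example in the spirit of the operation quoted above:
      // skip 3, insert "Hi!", skip 2, delete 4.
      console.log(apply("abcdefghij", [
        { type: "skip", count: 3 },
        { type: "insertCharacters", chars: "Hi!" },
        { type: "skip", count: 2 },
        { type: "deleteCharacters", count: 4 },
      ])); // -> "abcHi!dej"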
Paul Merrell

Canadian Spies Collect Domestic Emails in Secret Security Sweep - The Intercept - 0 views

  • Canada’s electronic surveillance agency is covertly monitoring vast amounts of Canadians’ emails as part of a sweeping domestic cybersecurity operation, according to top-secret documents. The surveillance initiative, revealed Wednesday by CBC News in collaboration with The Intercept, is sifting through millions of emails sent to Canadian government agencies and departments, archiving details about them on a database for months or even years. The data mining operation is carried out by the Communications Security Establishment, or CSE, Canada’s equivalent of the National Security Agency. Its existence is disclosed in documents obtained by The Intercept from NSA whistleblower Edward Snowden. The emails are vacuumed up by the Canadian agency as part of its mandate to defend against hacking attacks and malware targeting government computers. It relies on a system codenamed PONY EXPRESS to analyze the messages in a bid to detect potential cyber threats.
  • Last year, CSE acknowledged it collected some private communications as part of cybersecurity efforts. But it refused to divulge the number of communications being stored or to explain for how long any intercepted messages would be retained. Now, the Snowden documents shine a light for the first time on the huge scope of the operation — exposing the controversial details the government withheld from the public. Under Canada’s criminal code, CSE is not allowed to eavesdrop on Canadians’ communications. But the agency can be granted special ministerial exemptions if its efforts are linked to protecting government infrastructure — a loophole that the Snowden documents show is being used to monitor the emails. The latest revelations will trigger concerns about how Canadians’ private correspondence with government employees is being archived by the spy agency and potentially shared with police or allied surveillance agencies overseas, such as the NSA. Members of the public routinely communicate with government employees when, for instance, filing tax returns, writing a letter to a member of parliament, applying for employment insurance benefits or submitting a passport application.
  • Chris Parsons, an internet security expert with the Toronto-based internet think tank Citizen Lab, told CBC News that “you should be able to communicate with your government without the fear that what you say … could come back to haunt you in unexpected ways.” Parsons said that there are legitimate cybersecurity purposes for the agency to keep tabs on communications with the government, but he added: “When we collect huge volumes, it’s not just used to track bad guys. It goes into data stores for years or months at a time and then it can be used at any point in the future.” In a top-secret CSE document on the security operation, dated from 2010, the agency says it “processes 400,000 emails per day” and admits that it is suffering from “information overload” because it is scooping up “too much data.” The document outlines how CSE built a system to handle a massive 400 terabytes of data from Internet networks each month — including Canadians’ emails — as part of the cyber operation. (A single terabyte of data can hold about a billion pages of text, or about 250,000 average-sized mp3 files.)
  • The agency notes in the document that it is storing large amounts of “passively tapped network traffic” for “days to months,” encompassing the contents of emails, attachments and other online activity. It adds that it stores some kinds of metadata — data showing who has contacted whom and when, but not the content of the message — for “months to years.” The document says that CSE has “excellent access to full take data” as part of its cyber operations and is receiving policy support on “use of intercepted private communications.” The term “full take” is surveillance-agency jargon that refers to the bulk collection of both content and metadata from Internet traffic. Another top-secret document on the surveillance dated from 2010 suggests the agency may be obtaining at least some of the data by covertly mining it directly from Canadian Internet cables. CSE notes in the document that it is “processing emails off the wire.”
  •  
    " CANADIAN SPIES COLLECT DOMESTIC EMAILS IN SECRET SECURITY SWEEP BY RYAN GALLAGHER AND GLENN GREENWALD @rj_gallagher@ggreenwald YESTERDAY AT 2:02 AM SHARE TWITTER FACEBOOK GOOGLE EMAIL PRINT POPULAR EXCLUSIVE: TSA ISSUES SECRET WARNING ON 'CATASTROPHIC' THREAT TO AVIATION CHICAGO'S "BLACK SITE" DETAINEES SPEAK OUT WHY DOES THE FBI HAVE TO MANUFACTURE ITS OWN PLOTS IF TERRORISM AND ISIS ARE SUCH GRAVE THREATS? NET NEUTRALITY IS HERE - THANKS TO AN UNPRECEDENTED GUERRILLA ACTIVISM CAMPAIGN HOW SPIES STOLE THE KEYS TO THE ENCRYPTION CASTLE Canada's electronic surveillance agency is covertly monitoring vast amounts of Canadians' emails as part of a sweeping domestic cybersecurity operation, according to top-secret documents. The surveillance initiative, revealed Wednesday by CBC News in collaboration with The Intercept, is sifting through millions of emails sent to Canadian government agencies and departments, archiving details about them on a database for months or even years. The data mining operation is carried out by the Communications Security Establishment, or CSE, Canada's equivalent of the National Security Agency. Its existence is disclosed in documents obtained by The Intercept from NSA whistleblower Edward Snowden. The emails are vacuumed up by the Canadian agency as part of its mandate to defend against hacking attacks and malware targeting government computers. It relies on a system codenamed PONY EXPRESS to analyze the messages in a bid to detect potential cyber threats. Last year, CSE acknowledged it collected some private communications as part of cybersecurity efforts. But it refused to divulge the number of communications being stored or to explain for how long any intercepted messages would be retained. Now, the Snowden documents shine a light for the first time on the huge scope of the operation - exposing the controversial details the government withheld from the public. Under Canada's criminal code, CSE is no
Paul Merrell

Testosterone Pit - Home - The Other Reason Why IBM Throws A Billion At Linux ... - 0 views

  • IBM announced today that it would throw another billion at Linux, the open-source operating system, to run its Power System servers. The first time it had thrown a billion at Linux was in 2001, when Linux was a crazy, untested, even ludicrous proposition for the corporate world. So the moolah back then didn’t go to Linux itself, which was free, but to related technologies across hardware, software, and services, including things like sales and advertising – and into IBM’s partnership with Red Hat, which was developing its enterprise operating system, Red Hat Enterprise Linux. “It helped start a flurry of innovation that has never slowed,” said Jim Zemlin, executive director of the Linux Foundation. IBM claims that the investment would “help clients capitalize on big data and cloud computing with modern systems built to handle the new wave of applications coming to the data center in the post-PC era.” Some of the moolah will be plowed into the Power Systems Linux Center in Montpellier, France, which opened today. IBM’s first Power Systems Linux Center opened in Beijing in May. IBM may be trying to make hay of the ongoing revelations that have shown that the NSA and other intelligence organizations in the US and elsewhere have roped in American tech companies of all stripes with huge contracts to perfect a seamless spy network. They even include physical aspects of surveillance, such as license plate scanners and cameras, which are everywhere [read.... Surveillance Society: If You Drive, You Get Tracked].
  • Then another boon for IBM. Experts at the German Federal Office for Security in Information Technology (BSI) determined that Windows 8 is dangerous for data security. It allows Microsoft to control the computer remotely through a “special surveillance chip,” the wonderfully named Trusted Platform Module (TPM), and a backdoor in the software – with keys likely accessible to the NSA and possibly other third parties, such as the Chinese. Risks: “Loss of control over the operating system and the hardware” [read.... LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA].
  • It would be an enormous competitive advantage for an IBM salesperson to walk into a government or corporate IT department and sell Big Data servers that don’t run on Windows, but on Linux. With the Windows 8 debacle now in public view, IBM salespeople don’t even have to mention it. In the hope of stemming the pernicious revenue decline their employer has been suffering from, they can politely and professionally hype the security benefits of IBM’s systems and mention in passing the comforting fact that some of it would be developed in the Power Systems Linux Centers in Montpellier and Beijing. Alas, Linux too is tarnished. The backdoors are there, though the code can be inspected, unlike Windows code. And then there is Security-Enhanced Linux (SELinux), which was integrated into the Linux kernel in 2003. It provides a mechanism for supporting “access control” (a backdoor) and “security policies.” Who developed SELinux? Um, the NSA – which helpfully discloses some details on its own website (emphasis mine): The results of several previous research projects in this area have yielded a strong, flexible mandatory access control architecture called Flask. A reference implementation of this architecture was first integrated into a security-enhanced Linux® prototype system in order to demonstrate the value of flexible mandatory access controls and how such controls could be added to an operating system. The architecture has been subsequently mainstreamed into Linux and ported to several other systems, including the Solaris™ operating system, the FreeBSD® operating system, and the Darwin kernel, spawning a wide range of related work.
  • Among a slew of American companies who contributed to the NSA’s “mainstreaming” efforts: Red Hat. And IBM? Like just about all of our American tech heroes, it looks at the NSA and other agencies in the Intelligence Community as “the Customer” with deep pockets, ever increasing budgets, and a thirst for technology and data. Which brings us back to Windows 8 and TPM. A decade ago, a group was established to develop and promote Trusted Computing that governs how operating systems and the “special surveillance chip” TPM work together. And it too has been cooperating with the NSA. The founding members of this Trusted Computing Group, as it’s called facetiously: AMD, Cisco, Hewlett-Packard, Intel, Microsoft, and Wave Systems. Oh, I almost forgot ... and IBM. And so IBM might not escape, despite its protestations and slick sales presentations, the suspicion by foreign companies and governments alike that its Linux servers too have been compromised – like the cloud products of other American tech companies. And now, they’re going to pay a steep price for their cooperation with the NSA. Read...  NSA Pricked The “Cloud” Bubble For US Tech Companies
Gary Edwards

Google's Chrome Browser Sprouts Programming Kit of the Future "Node.js" | Wired Enterprise - 1 views

  •  
    Good article describing Node.js. The Node.js Summit is taking place in San Francisco on Jan 24th - 25th. http://goo.gl/AhZTD I'm wondering if anyone has used Node.js to create real-time, Cloud-ready compound documents? Replacing MSOffice OLE-ODBC-ActiveX-heavy productivity documents, forms and reports with Node.js event widgets, messages and database connections? I'm thinking along the lines of a Lotus Notes alternative with a Node.js-enhanced version of EverNote on the front end, and a Node.js-Hadoop productivity platform on the server side? Might have to contact Stephen O'Grady on this. He is a featured speaker at the conference. excerpt: At first, Chito Manansala (Visa & Sabre) built his Internet transaction processing systems using the venerable Java programming language. But he has since dropped Java and switched to what is widely regarded as The Next Big Thing among Silicon Valley developers. He switched to Node. Node is short for Node.js, a new-age programming platform based on a software engine at the heart of Google's Chrome browser. But it's not a browser technology. It's meant to help build software that sits on a distant server somewhere, feeding an application to your PC or smartphone, and it's particularly suited to systems like the one Chito Manansala is building - systems that juggle scads of information streaming to and from other sources. In other words, it's suited to the modern Internet. Two years ago, Node was just another open source project. But it has since grown into the development platform of the moment. At Yahoo!, Node underpins "Manhattan," a fledgling online service for building and hosting mobile applications. Microsoft is offering Node atop Windows Azure, its online service for building and hosting a much beefier breed of business application. And Sabre is just one of a host of big names using the open source platform to erect applications on their own servers. Node is based on the Javascript engine at the heart of Google's Chrome browser.
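    As a concrete illustration of the event-driven server model the excerpt describes, here is a minimal Node.js sketch in TypeScript (the port and response shape are invented for illustration): a single thread answers each request as an event on the event loop, rather than blocking one thread per connection.

      import * as http from "http";

      const server = http.createServer((req, res) => {
        // Each incoming request fires this callback on the event loop;
        // no thread sits blocked waiting on I/O in the meantime.
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ path: req.url, servedAt: Date.now() }));
      });

      server.listen(8080, () => {
        console.log("Listening on http://localhost:8080");
      });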
Paul Merrell

Asia Times | Say hello to the Russia-China operating system | Article - 0 views

  • Google has cut Huawei off from Android, so Huawei may migrate to Aurora. Call it mobile Eurasia integration; the evolving Russia-China strategic partnership may be on the verge of spawning its own operating system – and that is not a metaphor. Aurora is a mobile operating system currently developed by Russian Open Mobile Platform, based in Moscow. It is based on the Sailfish operating system, designed by Finnish technology company Jolla, which featured a batch of Russians on the development team. Quite a few top coders at Google and Apple also come from the former USSR – exponents of a brilliant scientific academy tradition.
  • Aurora could be regarded as part of Huawei’s fast-evolving Plan B. Huawei is now turbo-charging the development and implementation of its own operating system, HongMeng, a process that started no less than seven years ago. Most of the work on an operating system is writing drivers and APIs (application programming interfaces). Huawei would be able to integrate its code into the Russian system in no time.
  • No Google? Who cares? Tencent, Xiaomi, Vivo and Oppo are already testing the HongMeng operating system, as part of a batch of one million devices already distributed. HongMeng’s launch is still a closely guarded secret by Huawei, but according to CEO Richard Yu, it could happen even before the end of 2019 for the Chinese market, running on smartphones, computers, TVs and cars. HongMeng is rumored to be 60% faster than Android.
  • The HongMeng system may also harbor functions dedicated to security and protection of users’ data. That’s what’s scaring Google the most: Huawei developing software impenetrable to hacking attempts. Google is actively lobbying the Trump administration to grant another reprieve – or even abandon the Huawei ban altogether. By now it’s clear Team Trump has decided to wield a trade war as a geopolitical and geoeconomic weapon. They may not have calculated that other Chinese producers have the power to swing markets. Xiaomi, Oppo and Vivo, for instance, are not (yet) banned in the US market, and combined they sell more than Samsung. They could decide to move to Huawei’s operating system in no time.
  • The existence of the Lineage operating system is proof that Huawei does not face many hurdles in developing HongMeng – which will be compatible with all Android apps. There would be no problem adopting Aurora as well. Huawei will certainly open its own app store to compete with Google Play.
Gary Edwards

It's Microsoft's Game to Lose with Windows Mobile 7 - PCWorld Business Center - 0 views

  •  
    Good title. Nice to see that some of the tech media are starting to figure this out. It's about time. excerpt: While Microsoft has struggled with its mobile operating system, it still occupies a dominant stake in the server operating system, desktop operating system, business productivity software, messaging, and Web browser markets. Bells and whistles aside, it's hard to argue with the potential of a smartphone platform that can seamlessly tie in with the platforms and tools that businesses rely on. RIM, Apple, Palm, and now Google all recognize and respect Microsoft's presence in the enterprise. These other mobile platforms realize that integration with Microsoft backend tools--particularly Exchange Server--is imperative to success in the enterprise. No matter how hard they try, though, the solutions are often clumsy or cumbersome, and have a sort of "square peg in a round hole" feel to them. The core appeal of a Microsoft mobile operating system is the inclusion of native tools that naturally integrate with the existing server, desktop, and office productivity environment. Windows Mobile is uniquely suited to deliver a seamless and familiar experience for business professionals. Expecting Microsoft to introduce unique innovations or raise the bar in any way for mobile operating systems is probably a recipe for disappointment. But if Microsoft can at least improve Windows Mobile to the point that Windows Phones are more or less on par with next-generation smartphones like the iPhone or Droid, that will be enough for Microsoft to get the ship pointed in the right direction and begin to reclaim some of its lost mobile platform market share. Microsoft has a built-in audience, and the game is Microsoft's to lose.
Gary Edwards

The State of the Internet Operating System - O'Reilly Radar - 0 views

  •  
    ... The Internet Operating System is an Information Operating System ... Search is key to managing and working "information" ... Media Access ... Communications ... Identity and the Social Graph ... Payment ... Advertising ... Location ... Activity Streams - "Attention" ... Time  ... Image and Speech Recognition ... Government Data ... The Browser Where is the "operating system" in all this? Clearly, it is still evolving. Applications use a hodgepodge of services from multiple different providers to get the information they need. But how different is this from PC application development in the early 1980s, when every application provider wrote their own device drivers to support the hodgepodge of disks, ports, keyboards, and screens that comprised the still emerging personal computer ecosystem? Along came Microsoft with an offer that was difficult to refuse: We'll manage the drivers; all application developers have to do is write software that uses the Win32 APIs, and all of the complexity will be abstracted away. This is the crux of my argument about the internet operating system. We are once again approaching the point at which the Faustian bargain will be made: simply use our facilities, and the complexity will go away. And much as happened during the 1980s, there is more than one company making that promise. We're entering a modern version of "the Great Game", the rivalry to control the narrow passes to the promised future of computing.
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 0 views

  • There was a simple aim at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead-up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying from three to 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata (see the short sketch after this list of annotations). In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explain in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
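    As a small illustrative sketch of the content/metadata rule described above (TypeScript; the function name is invented, and this is not how GCHQ's systems are implemented), the host portion of a visited address is treated as metadata, while the full address counts as content:

      // Illustrative only: applies the stated classification rule to a visited address.
      function classifyVisit(address: string): { metadata: string; content: string } {
        const url = new URL(address);
        return {
          metadata: url.origin, // e.g. "https://www.gchq.gov.uk" — the site visited
          content: url.href,    // the full address, including the page path
        };
      }

      console.log(classifyVisit("https://www.gchq.gov.uk/what_we_do"));
      // -> { metadata: "https://www.gchq.gov.uk",
      //      content: "https://www.gchq.gov.uk/what_we_do" }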
Paul Merrell

Huawei launches first product with own operating system - 0 views

  • Chinese telecom giant Huawei, which has been caught in the crossfire of the Washington-Beijing trade war, on Saturday unveiled a new smart television, the first product to use its own operating system. The television will be available from Thursday in China and marks the first use of HarmonyOS, chief executive George Zhao said, adding that it will be marketed by its mid-range brand, Honor. Huawei revealed its highly-anticipated HarmonyOS on Friday as an alternative operating system for phones and other smart devices in the event that looming US sanctions prevent the firm from using Android technology. American companies are theoretically no longer allowed to sell technology products to Huawei, but a three-month exemption period -- which ends next week -- was granted by Washington before the measure came into force. That ban could stop the tech giant from getting hold of key hardware and software, including smartphone chips and elements of the Google Android operating system, which runs the vast majority of smartphones in the world, including Huawei's. Huawei -- considered the world leader in fast fifth-generation or 5G equipment and the world's number two smartphone producer -- has been blacklisted by US President Donald Trump amid suspicions it provides a backdoor for Chinese intelligence services, which the firm denies.
Paul Merrell

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 0 views

  • When the incoming emails stopped arriving, it seemed innocuous at first. But it would eventually become clear that this was no routine technical problem. Inside a row of gray office buildings in Brussels, a major hacking attack was in progress. And the perpetrators were British government spies. It was in the summer of 2012 that the anomalies were initially detected by employees at Belgium’s largest telecommunications provider, Belgacom. But it wasn’t until a year later, in June 2013, that the company’s security experts were able to figure out what was going on. The computer systems of Belgacom had been infected with a highly sophisticated malware, and it was disguising itself as legitimate Microsoft software while quietly stealing data. Last year, documents from National Security Agency whistleblower Edward Snowden confirmed that British surveillance agency Government Communications Headquarters was behind the attack, codenamed Operation Socialist. And in November, The Intercept revealed that the malware found on Belgacom’s systems was one of the most advanced spy tools ever identified by security researchers, who named it “Regin.”
  • The full story about GCHQ’s infiltration of Belgacom, however, has never been told. Key details about the attack have remained shrouded in mystery—and the scope of the attack unclear. Now, in partnership with Dutch and Belgian newspapers NRC Handelsblad and De Standaard, The Intercept has pieced together the first full reconstruction of events that took place before, during, and after the secret GCHQ hacking operation. Based on new documents from the Snowden archive and interviews with sources familiar with the malware investigation at Belgacom, The Intercept and its partners have established that the attack on Belgacom was more aggressive and far-reaching than previously thought. It occurred in stages between 2010 and 2011, each time penetrating deeper into Belgacom’s systems, eventually compromising the very core of the company’s networks.
  • Snowden told The Intercept that the latest revelations amounted to unprecedented “smoking-gun attribution for a governmental cyber attack against critical infrastructure.” The Belgacom hack, he said, is the “first documented example to show one EU member state mounting a cyber attack on another…a breathtaking example of the scale of the state-sponsored hacking problem.”
  • Publicly, Belgacom has played down the extent of the compromise, insisting that only its internal systems were breached and that customers’ data was never found to have been at risk. But secret GCHQ documents show the agency gained access far beyond Belgacom’s internal employee computers and was able to grab encrypted and unencrypted streams of private communications handled by the company. Belgacom invested several million dollars in its efforts to clean up its systems and beef up its security after the attack. However, The Intercept has learned that sources familiar with the malware investigation at the company are uncomfortable with how the clean-up operation was handled — and they believe parts of the GCHQ malware were never fully removed.
  • The revelations about the scope of the hacking operation will likely alarm Belgacom’s customers across the world. The company operates a large number of data links internationally (see interactive map below), and it serves millions of people across Europe as well as officials from top institutions including the European Commission, the European Parliament, and the European Council. The new details will also be closely scrutinized by a federal prosecutor in Belgium, who is currently carrying out a criminal investigation into the attack on the company. Sophia in ’t Veld, a Dutch politician who chaired the European Parliament’s recent inquiry into mass surveillance exposed by Snowden, told The Intercept that she believes the British government should face sanctions if the latest disclosures are proven.
  • What sets the secret British infiltration of Belgacom apart is that it was perpetrated against a close ally—and is backed up by a series of top-secret documents, which The Intercept is now publishing.
  • Between 2009 and 2011, GCHQ worked with its allies to develop sophisticated new tools and technologies it could use to scan global networks for weaknesses and then penetrate them. According to top-secret GCHQ documents, the agency wanted to adopt the aggressive new methods in part to counter the use of privacy-protecting encryption—what it described as the “encryption problem.” When communications are sent across networks in encrypted format, it makes it much harder for the spies to intercept and make sense of emails, phone calls, text messages, internet chats, and browsing sessions. For GCHQ, there was a simple solution. The agency decided that, where possible, it would find ways to hack into communication networks to grab traffic before it’s encrypted.
  • The Snowden documents show that GCHQ wanted to gain access to Belgacom so that it could spy on phones used by surveillance targets travelling in Europe. But the agency also had an ulterior motive. Once it had hacked into Belgacom’s systems, GCHQ planned to break into data links connecting Belgacom and its international partners, monitoring communications transmitted between Europe and the rest of the world. A map in the GCHQ documents, named “Belgacom_connections,” highlights the company’s reach across Europe, the Middle East, and North Africa, illustrating why British spies deemed it of such high value.
  • Documents published with this article:
    Automated NOC detection
    Mobile Networks in My NOC World
    Making network sense of the encryption problem
    Stargate CNE requirements
    NAC review – October to December 2011
    GCHQ NAC review – January to March 2011
    GCHQ NAC review – April to June 2011
    GCHQ NAC review – July to September 2011
    GCHQ NAC review – January to March 2012
    GCHQ Hopscotch
    Belgacom connections
Paul Merrell

Legislative Cyber Threats: CISA's Not The Only One | Just Security - 0 views

  • If anyone in the United States Senate had any doubts that the proposed Cyber Information Sharing Act (CISA) was universally hated by a range of civil society groups, a literal blizzard of faxes should’ve cleared up the issue by now. What’s not getting attention is a CISA “alternative” introduced last week by Sens. Mark Warner (D-Va) and Susan Collins (R-Me). Dubbed the “FISMA Reform Act,” the bill is described by its authors as follows:
    This legislation would allow the Secretary of Homeland Security to operate intrusion detection and prevention capabilities on all federal agencies on the .gov domain.
    The bipartisan bill would also direct the Secretary of Homeland Security to conduct risk assessments of any network within the government domain.
    The bill would allow the Secretary of Homeland Security to operate defensive countermeasures on these networks once a cyber threat has been detected.
    The legislation would strengthen and streamline the authority Congress gave to DHS last year to issue binding operational directives to federal agencies, especially to respond to substantial cyber security threats in emergency circumstances.
  • The bill would require the Office of Management and Budget to report to Congress annually on the extent to which OMB has exercised its existing authority to enforce government-wide cyber security standards. On the surface, it actually sounds like a rational response to the disastrous OPM hack. Unfortunately, the Warner-Collins bill has some vague or problematic language and non-existent definitions that make it potentially just as dangerous for data security and privacy as CISA. The bill would allow the Secretary of Homeland Security to carry out cyber security activities “in conjunction with other agencies and the private sector” [for] “assessing and fostering the development of information security technologies and capabilities for use across multiple agencies.” While the phrase “information sharing” is not present in this subsection, “security technologies and capabilities” is more than broad — and vague — enough to allow it.
  • The bill would also allow the secretary to “acquire, intercept, retain, use, and disclose communications and other system traffic that are transiting to or from or stored on agency information systems and deploy countermeasures with regard to the communications and system traffic.”
  • The bill also allows the head of a federal agency or department “to disclose to the Secretary or a private entity providing assistance to the Secretary…information traveling to or from or stored on an agency information system, notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary.” (Emphasis added.) So confidential, proprietary, or other information otherwise precluded from disclosure under laws like HIPAA or the Privacy Act gets waived if the Secretary of DHS or an agency head feels that your email needs to be shared with a government-contracted outfit like the Hacking Team for analysis. And the bill explicitly provides for just this kind of cyber threat analysis outsourcing:
  • (3) PRIVATE ENTITIES. — The Secretary may enter into contracts or other agreements, or otherwise request and obtain the assistance of, private entities that provide electronic communication or information security services to acquire, intercept, retain, use, and disclose communications and other system traffic in accordance with this subsection. The bill further states that the content of your communications will be retained only if the communication is associated with a known or reasonably suspected information security threat, and communications and system traffic will not be subject to the operation of a countermeasure unless associated with the threats. (Emphasis added.) “Reasonably suspected” is about as squishy a definition as one can find.
  •  
    "The bill also allows the head of a federal agency or department "to disclose to the Secretary or a private entity providing assistance to the Secretary…information traveling to or from or stored on an agency information system, notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary."" Let's see: if your information is intercepted by the NSA and stored on its "information system" in Bluffdale, Utah, then it can be disclosed to the Secretary of DHS or any private entity providing him/her with assistance, "notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary." And if NSA just happens to be intercepting every digital bit of data generated or received in the entire world, including the U.S., then it's all in play, "notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary.". Sheesh! Our government voyeurs never stop trying to get more nude pix and videos to view.  
Gary Edwards

Method for invoking UOML instructions - Patent application - Embodiments of the present... - 1 views

  •  
    Patent application filed on OASIS UOML access by API. [0002] The present invention relates to electronic document processing technologies, and particularly to a method for encapsulating Unstructured Operation Markup Language (UOML) into an Application Programming Interface (API).  BACKGROUND OF THE INVENTION  [0003] The UOML standard includes a series of docbase management system instructions defined according to a format of "action+object" in Extensible Markup Language (XML), which has been explained in detail in the UOML Standard published by the Organization for the Advancement of Structured Information Standards (OASIS). Since XML works across different platforms and with different languages, the UOML standard can enable the docbase management system instructions to be exchanged across different platforms in different languages. However, in practical applications, operations on a docbase are usually controlled by programs written in programming languages, hence the programs need to parse and process UOML XML texts. If every application developer designs his/her own way of parsing and processing UOML XML texts in his/her programs, the workload of coding will increase significantly and the efficiency of coding will drop sharply.  SUMMARY OF THE INVENTION  [0004] The objective of the present invention is to provide a method for encapsulating Unstructured Operation Markup Language (UOML) into an Application Programming Interface (API) of a programming language so as to improve the development efficiency of docbase management system application developers.  [0005] The method provided by the present invention for encapsulating UOML into an API includes:  Read more: http://www.faqs.org/patents/app/20090187927#ixzz0xVS2ZUSr
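    To make the patent's "action+object" pattern concrete, a docbase instruction might look something like the sketch below. The element and attribute names here are illustrative guesses, not copied from the OASIS UOML specification:

        <!-- Hypothetical UOML-style instruction: the action element (open)
             wraps the object it operates on (a document in a docbase). -->
        <uoml:open xmlns:uoml="urn:example:uoml">
          <uoml:doc path="docbase://contracts/lease-2009" mode="read-write"/>
        </uoml:open>

    The encapsulation the patent claims would hide this XML behind a conventional function call (say, a hypothetical openDoc(path, mode) in Java or C++), with the API generating and parsing the UOML text so that application developers never handle it directly.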
Gary Edwards

Adeptol Viewing Technology Features - 0 views

  •  
    No ActiveX, no plug-ins, no software to download: any OS, any browser, any programming language. That is the power of Adeptol. Document support: more than 300 document types out of the box. Not a virtual printer; a multitenant platform for high-end document viewing. No additional software needs to be installed on the server, and no plug-ins, ActiveX controls, or applets need to be downloaded on the client side. The advanced API offers full customization and UI changes, and Adeptol Server installs on any OS and integrates with any programming language. Adeptol can help you retain your customers and streamline your content integration efforts. Leverage Web 2.0 technologies to get a completely scalable content viewer that easily handles any type of content in virtually unlimited volume, with additional capabilities to support high-volume transaction and archive environments. The enterprise-class infrastructure was built to meet the needs of the world's most demanding global enterprises, and because the viewer is based on AJAX technology it can be integrated into your application with ease. It can be installed on Windows (32-bit/64-bit) and Linux (32-bit/64-bit) servers; whether you work in .NET, C#, PHP, ColdFusion, or JSP, the viewer can be integrated through the API set and comes with sample code for all languages to get you started. It has been tested and verified for compatibility with more than 99% of browsers on different platforms.
Gary Edwards

WE'RE BLOWN AWAY: This Startup Could Literally Change The Entire Software Industry - Bu... - 0 views

  •  
    "Startup Numecent has come out of stealth mode today with some of the most impressive enterprise technology we've seen in a decade. Plus the company is interesting for other reasons, like its business model and its founder. Numecent offers something it calls "cloud paging" and, if successful, it could be a game-changer for enterprise software, video gaming, and smartphone apps. Red Hat thinks so. It has already partnered with the company to help it offer Windows software to Linux users. "Cloud paging" instantly "cloudifies" any software, even an operating system like Windows itself, says founder and CEO Osman Kent. It lets any software, with no modification, be delivered from the cloud and run as fast or faster than if the app was on your desktop. Lots of so-called "desktop virtualization" services work fast. But cloud-paging can even operate the cloud software if the PC gets disconnected from the network or Internet. It can also turn a smartphone into a server. That means a bunch of devices like tablets can run the software -- like a game -- off of the smartphone. Imagine showing up to a party and letting all your friends play the latest version of Halo from your phone. That's crazy cool. Cloudpaging can do all this because it doesn't use "pixel-streaming" technology like other virtualization tech. Instead it temporarily downloads bits of the application itself (instructions) and runs them on the device. It can almost magically predict which parts of the app the user will need, and downloads only those parts. For business owners, that's not even the best part. It also helps enterprises sidestep extra licensing fees associated with the cloud. For instance, Microsoft licenses its software by the device, not by the user, and, in many cases, charges a "Virtual Desktop Access" fee for each device using a virtual version of Windows. (For a bit of light reading, check out the Microsoft virtual desktop licensing white paper: PDF) Cloudpaging has what Kent calls "f
Gary Edwards

Microsoft vs. EU Case T-201/04 WorkGroup Servers and Interoperability - 0 views

  •  
    JUDGMENT OF THE COURT OF FIRST INSTANCE (Grand Chamber) 17 September 2007 (*) (Competition - Abuse of dominant position - Client PC operating systems - Work group server operating systems - Streaming media players - Decision finding infringements of Article 82 EC - Refusal of the dominant undertaking to supply and authorise the use of interoperability information - Supply by the dominant undertaking of its client PC operating system conditional on the simultaneous acquisition of its media player - Remedies - Appointment of an independent monitoring trustee - Fine - Determination of the amount - Proportionality) In Case T-201/04, Microsoft Corp., established in Redmond, Washington (United States), represented by J.-F. Bellis, lawyer, and I. Forrester QC,
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
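    "Clean, valid XHTML" means nothing more exotic than a well-formed XML document that uses the XHTML vocabulary. A minimal example, for readers who have not looked at one in a while:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head><title>Chapter 1</title></head>
          <body>
            <h1>Chapter 1</h1>
            <p>Every element is properly nested and closed, so any
               XML tool can parse this page unmodified.</p>
          </body>
        </html>

    Because this file is simultaneously a Web page and an XML document, it can be viewed in a browser, edited in a wiki, or fed to an XML toolchain without any conversion step.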
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
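    As a sketch of that kind of extension, the fragment below distinguishes an epigraph and a summary from ordinary paragraphs using only the stock class attribute (the class names are our own invention, not any standard), so browsers render it happily while a publisher's toolchain can still select the chunks semantically:

        <div class="epigraph">
          <p>A book is a machine to think with.</p>
          <p class="attribution">I. A. Richards</p>
        </div>
        <p class="summary">This chapter surveys the XML workflow problem.</p>
        <p>Ordinary body text continues here.</p>

    A stylesheet or transformation script can then match on class="epigraph" much as a DocBook processor would match on an epigraph element.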
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
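    Unzipping an ePub makes the point vividly: alongside the XHTML content files sit a mimetype file and a couple of small XML manifests. The standard META-INF/container.xml, which points at the package manifest, is only a few lines:

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0"
            xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>

    (The OEBPS/ path is a convention rather than a requirement; the point is that every layer of the wrapper is plain XML.)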
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
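    To give a flavor of the content side, text in an IDML (or ICML) story is wrapped in paragraph- and character-style ranges that reference named InDesign styles. A simplified sketch, not a complete story file:

        <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
          <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
            <Content>The quick brown fox jumps over the lazy dog.</Content>
          </CharacterStyleRange>
        </ParagraphStyleRange>

    Turning an XHTML paragraph into a structure like this is a mechanical, rule-based translation, which is precisely what made the bridge to print feasible.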
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
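    Concretely, the cleanup turned export artifacts like the first fragment below into the proper list structure of the second (a representative sketch of the pattern, not the exact markup InDesign emitted):

        <!-- As exported: list items faked with classed paragraphs -->
        <p class="bullet-list">First point</p>
        <p class="bullet-list">Second point</p>

        <!-- After cleanup: a real XHTML list -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>

    Search-and-replace handles this comfortably because the export is at least consistent about its own conventions.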
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
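    The heart of such a script is a set of small templates, one per XHTML structure. The sketch below is our illustration of the idea, not the SFU script itself, and a real ICML file needs more wrapper elements than shown here:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <!-- Map each XHTML paragraph to an ICML paragraph range
               carrying a named InDesign style. -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>

    Run through any XSLT 1.0 processor (for instance, xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml, with hypothetical file names), the output is a file InDesign will accept through its normal Place command.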
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
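    Under the hood, the wrapper centers on a package manifest (content.opf) that lists the chapter files and their reading order. A stripped-down example of the shape such a manifest takes, with illustrative values throughout:

        <?xml version="1.0" encoding="UTF-8"?>
        <package xmlns="http://www.idpf.org/2007/opf" version="2.0"
            unique-identifier="bookid">
          <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Book Publishing 1</dc:title>
            <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
            <dc:language>en</dc:language>
          </metadata>
          <manifest>
            <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
            <item id="css" href="book.css" media-type="text/css"/>
            <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
          </manifest>
          <spine toc="ncx">
            <itemref idref="ch1"/>
          </spine>
        </package>

    The companion toc.ncx file supplies the navigable table of contents; tools like eCub exist precisely so that nobody writes these files by hand.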
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been, "Working with the Web as the Center of Everything".
Gary Edwards

RuleLab.Net Server: Web system for design, implementation and management of business pr... - 0 views

  •  
    RuleLab.Net is a web-based system for designing and implementing the business rules that operate on an application's XML data. Extend your existing applications by adding Rule building and Business Rules Engine (BRE) capabilities. Consolidate your business logic in an easy to read format; build, test, share, and deploy your Rules using the web browser; and integrate them into your system via the BRE. Intuitive GUI, English-like syntax, and a centralized repository empower business users with direct access to the Rules. In the RuleLab.Net system, Business Rules are composed and managed over the Internet or Intranet using the web-based Rules Designer. It allows users to associate an application XML data template with Rules, create a vocabulary of natural terms, graphically build complex logical expressions, test the Rules on data samples, and store the Rules in a database. Features include strong data types, reasoning, rule priorities and dependencies, calculation formulas, looping-data-structure support, and a built-in set of computational, aggregate and other data processing functions. Rules and other system objects are stored in XML files that can be downloaded, modified, and uploaded to the online repository. Rule changes made online can be instantly deployed for runtime use by the applications integrated with the BRE. The forward chaining BRE parses XML application data against the ruleset, updates your data XML document, and returns it back to the application along with the comprehensive state information. Written in .NET, the BRE component can be utilized as a managed assembly, a COM object, or through the Web Service.
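    RuleLab.Net's public material does not include a full rule schema, so the following is only a guess at the shape of a stored rule, offered to make the XML-in/XML-out flow concrete: the engine receives application data as XML, evaluates conditions against named fields, and writes results back into the same document.

        <!-- Hypothetical rule in the spirit of RuleLab.Net's
             "English-like" rules over an XML data template. -->
        <rule name="LargeOrderDiscount" priority="1">
          <condition>Order/Total &gt;= 1000 AND Customer/Status = 'Gold'</condition>
          <action>SET Order/Discount = Order/Total * 0.05</action>
        </rule>

    In a forward-chaining engine, actions that change the data cause dependent rules to be re-evaluated, so a discount set here could trigger further rules until the ruleset stabilizes.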
Gary Edwards

New Adobe Air 2.0 Released : ISEdb.COM - 0 views

  •  
    Is Adobe AiR a Virtual Desktop?  We expect a VD to run an alien OS and that OS's specific applications.  With AiR 2.0 it seems Adobe has ditched the "OS" component of a VD, and the OS-specific applications, but is quite capable of running AiR-based applications and information services that would otherwise have been designed for a specific OS environment.   Another way of looking at this would be to say that VDs are designed to run existing OSes and OS-specific applications, while AiR is designed to run newly written, OS-independent applications that have one very important advantage over legacy applications and information systems: AiR speaks the language of Web 3.0.   This is WebKit HTML5-CSS3 with an advanced but AiR-specific version of JavaScript called "ActionScript".  What Adobe doesn't do is provide support for other critically important aspects of the WebKit interactive Web 3.0 model: Canvas/SVG!  Adobe continues to push the proprietary SWF interactive vector graphics format.   Note that Microsoft's Silverlight universal runtime does not support anything in the WebKit Web 3.0 model!  It's all proprietary. excerpt: For the first time since 2007, Adobe has updated its Air platform, released recently in beta with a slew of new features. The features include support for detection of mass storage devices, advanced networking capabilities, ability to open a file with its default application, improved cross-platform printing, and a bunch of other things that you probably won't really notice in any other way other than your Adobe working significantly more efficiently and smoothly than before. The 2.0 version of Air also will be able to support HTML5 and CSS3, due to an upgrade of its WebKit. Developers will also be happy to know that they can create Air applications that can be installed through a native installer. Air's changes have seen it morph into something of an 'operating system sitting on an operating system'. According
Paul Merrell

Hacking Team Asks Customers to Stop Using Its Software After Hack | Motherboard - 0 views

  • But the hack hasn’t just ruined the day for Hacking Team’s employees. The company, which sells surveillance software to government customers all over the world, from Morocco and Ethiopia to the US Drug Enforcement Agency and the FBI, has told all its customers to shut down all operations and suspend all use of the company’s spyware, Motherboard has learned. “They’re in full on emergency mode,” a source who has inside knowledge of Hacking Team’s operations told Motherboard.
  • Hacking Team notified all its customers on Monday morning with a “blast email,” requesting them to shut down all deployments of its Remote Control System software, also known as Galileo, according to multiple sources. The company also doesn’t have access to its email system as of Monday afternoon, a source said. On Sunday night, an unnamed hacker, who claimed to be the same person who breached Hacking Team’s competitor FinFisher last year, hijacked its Twitter account and posted links to 400GB of internal data. Hacking Team woke up to a massive breach of its systems.
  • A source told Motherboard that the hackers appear to have gotten “everything,” likely more than what the hacker has posted online, perhaps more than one terabyte of data. “The hacker seems to have downloaded everything that there was in the company’s servers,” the source, who could only speak on condition of anonymity, told Motherboard. “There’s pretty much everything here.” It’s unclear how the hackers got their hands on the stash, but judging from the leaked files, they broke into the computers of Hacking Team’s two systems administrators, Christian Pozzi and Mauro Romeo, who had access to all the company’s files, according to the source. “I did not expect a breach to be this big, but I’m not surprised they got hacked because they don’t take security seriously,” the source told me. “You can see in the files how much they royally fucked up.”
  • For example, the source noted, none of the sensitive files in the data dump, from employees’ passports to lists of customers, appear to be encrypted. “How can you give all the keys to your infrastructure to a 20-something who just joined the company?” he added, referring to Pozzi, whose LinkedIn shows he’s been at Hacking Team for just over a year. “Nobody noticed that someone stole a terabyte of data? You gotta be a fuckwad,” the source said. “It means nobody was taking care of security.”
  • The future of the company, at this point, is uncertain. Employees fear this might be the beginning of the end, according to sources. One current employee, for example, started working on his resume, a source told Motherboard. It’s also unclear how customers will react to this, but a source said that it’s likely that customers from countries such as the US will pull the plug on their contracts. Hacking Team asked its customers to shut down operations, but according to one of the leaked files, as part of Hacking Team’s “crisis procedure,” it could have killed their operations remotely. The company, in fact, has “a backdoor” into every customer’s software, giving it the ability to suspend it or shut it down—something that even customers aren’t told about. To make matters worse, every copy of Hacking Team’s Galileo software is watermarked, according to the source, which means Hacking Team, and now everyone with access to this data dump, can find out who operates it and who they’re targeting with it.